big tech


The Download: what Moltbook tells us about AI hype, and the rise and rise of AI therapy

MIT Technology Review

For a few days recently, the hottest new hangout on the internet was a vibe-coded Reddit clone called Moltbook, which billed itself as a social network for bots. As the website's tagline puts it: "Where AI agents share, discuss, and upvote." Launched on January 28, Moltbook went viral in a matter of hours. It was designed as a place where instances of a free, open-source LLM-powered agent known as OpenClaw (formerly ClawdBot, then Moltbot) could come together and do whatever they wanted. But is Moltbook really a glimpse of the future, as many have claimed?

More than a billion people worldwide suffer from a mental-health condition, according to the World Health Organization. The prevalence of anxiety and depression is growing in many demographics, particularly among young people, and suicide claims hundreds of thousands of lives globally each year. Given the clear demand for accessible and affordable mental-health services, it's no wonder that people have looked to artificial intelligence for possible relief. Millions are already actively seeking therapy from popular chatbots, or from specialized psychology apps like Wysa and Woebot. Four timely new books are a reminder that while the present feels like a blur of breakthroughs, scandals, and confusion, this disorienting time is rooted in deeper histories of care, technology, and trust.

Making AI Work, MIT Technology Review's new AI newsletter, is here. For years, our newsroom has explored AI's limitations and potential dangers, as well as its growing energy needs. And our reporters have looked closely at how generative tools are being used for tasks such as coding and running scientific experiments. But how is AI being used in fields like health care, climate tech, education, and finance? How are small businesses using it? And what should you keep in mind if you use AI tools at work? These questions guided the creation of Making AI Work, a new AI mini-course newsletter.
Read more about it, and sign up here to receive the seven editions straight to your inbox.

The number of civil lawsuits it's pursuing has dropped sharply compared with Trump's first term.
It's the latest example of Brussels' attempts to rein in Big Tech.
Local governments and banks are only too happy to oblige promising startups.
Cryptocurrency is now fully part of the financial system, for better or worse.
"Agentic engineering" is the next big thing, apparently.
Runners had long suspected its suggestions were pushing them towards injury.
Only around three dozen supporters turned up.
Its menswear suggestions are more manosphere influencer than suave gentleman.
"There is no Plan B, because that assumes you will fail."


The Download: why LLMs are like aliens, and the future of head transplants

MIT Technology Review

How large is a large language model? We now coexist with machines so vast and so complicated that nobody quite understands what they are, how they work, or what they can really do--not even the people who build them. Even though nobody fully understands how it works--and thus exactly what its limitations might be--hundreds of millions of people now use this technology every day. To help overcome our ignorance, researchers are studying LLMs as if they were doing biology or neuroscience on vast living creatures--city-size xenomorphs that have appeared in our midst. And they're discovering that large language models are even weirder than they thought. The Italian neurosurgeon Sergio Canavero has been preparing for a surgery that might never happen.


Ed Zitron on big tech, backlash, boom and bust: 'AI has taught us that people are excited to replace human beings'

The Guardian

His blunt, brash scepticism has made the podcaster and writer something of a cult figure. But as concern over large language models builds, he's no longer the outsider he once was. If, some time in an entirely possible future, they come to make a movie about "how the AI bubble burst", Ed Zitron will doubtless be a main character. He's the perfect outsider figure: the eccentric loner who saw all this coming and screamed from the sidelines that the sky was falling, but nobody would listen. Just as Christian Bale portrayed Michael Burry, the investor who predicted the 2008 financial crash, in The Big Short, you can well imagine Robert Pattinson fighting Paul Mescal, say, to portray Zitron, the animated, colourfully obnoxious but doggedly detail-oriented Brit who has become one of big tech's noisiest critics. This is not to say the AI bubble has burst, necessarily, but against a tidal wave of AI boosterism, Zitron's blunt, brash scepticism has made him something of a cult figure. His tech newsletter, Where's Your Ed At, now has more than 80,000 subscribers; his weekly podcast, Better Offline, is well within the Top 20 on the tech charts; he's a regular dissenting voice in the media; and his subreddit has become a safe space for AI sceptics, including those within the tech industry itself - one user describes him as "a lighthouse in a storm of insane hypercapitalist bullshit".


The Download: Kenya's Great Carbon Valley, and the AI terms that were everywhere in 2025

MIT Technology Review

Welcome to Kenya's Great Carbon Valley: a bold new gamble to fight climate change. In June last year, startup Octavia Carbon began running a high-stakes test in the small town of Gilgil in south-central Kenya. It's harnessing some of the excess energy generated by vast clouds of steam under the Earth's surface to power prototypes of a machine that promises to remove carbon dioxide from the air in a manner that the company says is efficient, affordable, and--crucially--scalable. The company's long-term vision is undoubtedly ambitious--it wants to prove that direct air capture (DAC), as the process is known, can be a powerful tool to help the world keep temperatures from rising to ever more dangerous levels. But DAC is also a controversial technology, unproven at scale and wildly expensive to operate. On top of that, Kenya's Maasai people have plenty of reasons to distrust energy companies. This article is also part of the Big Story series: MIT Technology Review's most important, ambitious reporting.


Big Tech-Funded AI Papers Have Higher Citation Impact, Greater Insularity, and Larger Recency Bias

Gnewuch, Max Martin, Wahle, Jan Philip, Ruas, Terry, Gipp, Bela

arXiv.org Artificial Intelligence

Over the past four decades, artificial intelligence (AI) research has flourished at the nexus of academia and industry. However, Big Tech companies have increasingly acquired the edge in computational resources, big data, and talent. So far, it has been largely unclear how many papers the industry funds, how their citation impact compares to non-funded papers, and what drives industry interest. This study fills that gap by quantifying the number of industry-funded papers at 10 top AI conferences (e.g., ICLR, CVPR, AAAI, ACL) and their citation influence. We analyze about 49.8K papers, about 1.8M citations from AI papers to other papers, and about 2.3M citations from other papers to AI papers from 1998-2022 in Scopus. Through seven research questions, we examine the volume and evolution of industry funding in AI research, the citation impact of funded papers, the diversity and temporal range of their citations, and the subfields in which industry predominantly acts. Our findings reveal that industry presence has grown markedly since 2015, from less than 2 percent to more than 11 percent in 2020. Between 2018 and 2022, 12 percent of industry-funded papers achieved high citation rates as measured by the h5-index, compared to 4 percent of non-industry-funded papers and 2 percent of non-funded papers. Top AI conferences engage more with industry-funded research than non-funded research, as measured by our newly proposed metric, the Citation Preference Ratio (CPR). We show that industry-funded research is increasingly insular, citing predominantly other industry-funded papers while referencing fewer non-funded papers. These findings reveal new trends in AI research funding, including a shift towards more industry-funded papers and their growing citation impact, greater insularity of industry-funded work than non-funded work, and a preference of industry-funded research to cite recent work.
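The abstract names a new metric, the Citation Preference Ratio (CPR), but does not define it. Purely as an illustration of the general idea of a preference ratio (the definition below is an assumption for this sketch, not the paper's actual formula), one could compare the per-paper rate at which conference papers cite industry-funded work with the per-paper rate at which they cite non-funded work:

```python
# Hypothetical sketch only: the paper's actual CPR definition is not given in
# the abstract. Here we assume a simple form: citations received per
# industry-funded paper, divided by citations received per non-funded paper.
# A ratio above 1 would indicate a citation preference for funded work.

def citation_preference_ratio(cites_to_funded: int, n_funded: int,
                              cites_to_nonfunded: int, n_nonfunded: int) -> float:
    """Assumed CPR: per-paper citation rate to industry-funded papers
    over the per-paper citation rate to non-funded papers."""
    rate_funded = cites_to_funded / n_funded
    rate_nonfunded = cites_to_nonfunded / n_nonfunded
    return rate_funded / rate_nonfunded

# Made-up numbers: 3,000 citations across 500 funded papers (6 per paper)
# vs 9,000 citations across 4,500 non-funded papers (2 per paper).
print(citation_preference_ratio(3000, 500, 9000, 4500))  # -> 3.0
```

Under this assumed definition, a CPR of 3.0 would mean funded papers are cited three times as often per paper as non-funded ones; the study's actual metric and values should be taken from the paper itself.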


It's Time to Save Silicon Valley From Itself

WIRED

Big Tech has lost its way. At WIRED's Big Interview event, Techdirt editor Mike Masnick and Common Tools CEO Alex Komoroske announced a manifesto designed to help the industry get back on track. Alex Komoroske has always been at odds with Big Tech's darker side. Though he cut his product-management teeth at Google and Stripe, he was never comfortable with the industry's increasing prioritization of profits over people. Once during his time at Google, he extolled the societal benefits of a project only to be met with, "Oh Alex, you'd be a VP by now if you just stopped thinking through the implications of your actions."


MIKE DAVIS: Congress must stop Big Tech's AI amnesty scam before it's too late

FOX News

Senator Ted Cruz leads efforts to pass AI amnesty through the NDAA, giving Big Tech federal preemption without rules to protect conservatives and children.


Irresponsible AI: big tech's influence on AI research and associated impacts

Hernandez-Garcia, Alex, Volokhova, Alexandra, Williams, Ezekiel, Kabakibo, Dounia Shaaban

arXiv.org Artificial Intelligence

The accelerated development, deployment and adoption of artificial intelligence systems has been fuelled by the increasing involvement of big tech. This has been accompanied by increasing ethical concerns and intensified societal and environmental impacts. In this article, we review and discuss how these phenomena are deeply entangled. First, we examine the growing and disproportionate influence of big tech in AI research and argue that its drive for scaling and general-purpose systems is fundamentally at odds with the responsible, ethical, and sustainable development of AI. Second, we review key current environmental and societal negative impacts of AI and trace their connections to big tech and its underlying economic incentives. Finally, we argue that while it is important to develop technical and regulatory approaches to these challenges, these alone are insufficient to counter the distortion introduced by big tech's influence. We thus review and propose alternative strategies that build on the responsibility of implicated actors and collective action.


The fight to see clearly through big tech's echo chambers

The Guardian

'The encroachment of technology can feel inevitable.' Today, I'm mulling over whether to upgrade my iPhone 11 Pro. How to see through Silicon Valley's narrative? The encroachment of technology can feel inevitable. It may always have, but increasingly that perception is bolstered by big tech's own friendly media bubble. Yet as big tech's echo chambers grow louder, so do the critical voices from within.


EU moves to ease AI, privacy rules amid pressure from Big Tech, Trump

Al Jazeera

The reforms, which amend the AI Act and several other privacy and tech-related laws, would also cut back on website pop-ups asking permission to use cookies and reduce documentation requirements for small and medium-sized businesses. EU tech chief Henna Virkkunen said the changes, which need to be approved by representatives of the 27 EU member states, would boost European competitiveness by simplifying rules about AI, cybersecurity and data protection. "We have talent, infrastructure, a large internal single market. But our companies, especially our start-ups and small businesses, are often held back by layers of rigid rules," Virkkunen said. Lobby groups for tech giants in the United States, where President Donald Trump's administration has been a vocal critic of Europe's regulatory approach, welcomed the move, while lamenting that the measures did not go far enough.